Search Results: "mario"

5 May 2015

Miriam Ruiz: SuperTuxKart 0.9: The other side of the story

I approached the SuperTuxKart community fearing some backlash due to last week's discussion about their release 0.9, and found instead a nice, friendly and welcoming community. I have already had some very nice talks with them since then, and they have patiently explained to me the sequence of events that led to the situation I mentioned and that, for the sake of fairness, I consider I have to share here too. You can read the log of the first conversation I had with them (the log has been edited and cleaned up for clarity and readability). I seriously recommend reading it; it's an honest, friendly conversation, and it's first hand. For those who don't already know the game:

All this story seems to start with the complaint of a 6-year-old girl, a close relative of one of the developers and an STK user, who explained that she always felt that Mario Kart was better because there was a princess in it. I'm not particularly happy with princesses as role models for girls, but one thing I have always said is that we have to listen to kids and take their opinions into account, and I know that if I had such a request from one of the kids closest to me, I probably would have fulfilled it too. In any case, Free Software projects based on volunteer work are essentially a do-ocracy, and it is assumed that whoever does the work gets to decide about it.

So that is how Princess Sara was added to the game. While developing her, I was assured, they took extra care that her proportions were somewhat realistic, and not as distorted as we're used to seeing in Barbie or many Disney films. Sara is inspired by an OpenGameArt wizard and is not supposed to be a weak damsel in distress, but in fact a powerful character in the game's universe.

Sara is not the only playable female character. There are a few others: Suzanne (a monkey, Blender's mascot), Xue (Xfce's mouse) and Amanda (a panda, the mascot of Window Maker). Sara happens to be the only playable human character, male or female. While it has been argued that by adding that character, a player might get the impression that the rest of the characters are male by default, I have been told that the intention is exactly the opposite, and that the fact that the only playable human character in the game is female should make it more attractive to girls. To some, at least. Here are some images of Sara:

So the fact is that they have invested a lot of time in developing Sara's model. I'm not an artist myself, so I don't know first hand how much time and effort it takes to make such a model, but in any case it seems quite a lot. When they designed the beach track Gran Paradiso, they wanted to add people to the beach. That track is, in fact, inspired by a real place: Princess Juliana Airport. Time was running out and they wanted to publish a version with what they already had, so they used Sara's model in a bikini on the beach, with the intention of adding more people, male and female, later. The overall view of the beach would be:

This is how that track shows when the players are driving in it:

Now, about the poster of version 0.9: it is supposed to be inspired by the previous poster of version 0.8.1, only this time themed around Carnival (which is, in fact, a celebration in which sexualization of both genders is a core part). I know that there are accusations of cultural appropriation, but I couldn't tell, as my white privilege probably shields me from seeing that. Up to now, no one has said anything about that, only Gunnar explaining his point of view as a non-native Mexican: "While the poster does not strike me as the most cautious possible, I do not see it as culturally offensive. It does not attempt to set a scene portraying what those cultures were really like; the portrait it paints is similar to so many fantasy recreations." In my opinion, even when the model is done in good taste, with no super-big breasts and no unrealistic waist, it is still depicting a girl without much clothing as the main element of the scene, with an attire, a posture and an attitude that clearly resemble Carnival and thus inevitably convey a message of sexualization. Even though I can't deny that it's a cute poster, it's one I wouldn't be happy to see, for example, in a school if someone wanted to promote the game there. The author of the poster, anyway, tells me that he had a totally different intention when making it: he wanted to depict a powerful princess, at the center of SuperTuxKart's universe, celebrating the new engine.

About the panties showing every now and then, I've been told that they are so hard to see that you would really have to open the model itself to view them. I'm not saying that I like them, though; I think it would have been better if Sara had had short pants under the skirt if she was going to drive the snowmobile in a dress, but I'm not sure that's something important enough to condemn the game over. The girl mentioned at the beginning of this post seems to have found the animation funny, started laughing, said that Sara is very silly, and that was all. It's probably something more silly than naughty, I guess. Even so, as I said, it's something I don't like too much. I don't have to agree with the STK developers on everything, I guess.

There's one thing I would like to highlight about my conversations with the developers of SuperTuxKart, though. I like them. They seem to be as concerned about the wellbeing of kids as I am, they have their own ethical norms of what's acceptable and what's not, and they want to make something to be proud of. Sometimes many of these conflicts arise from a lack of trust. When I first saw the screenshots with the girl in a bikini and the panties showing, I was honestly concerned about the direction the project was taking. After having talked with the developers, I am calmer about it, because they seem to have their hearts in the right place; they care, they are motivated and they work hard. I don't know if a princess would be my first choice for a main female character, but at least their intention seems to be to give some girls a sensible role model in the game with whom they can identify.

1 May 2015

Miriam Ruiz: Sexualized depiction of women in SuperTuxKart 0.9

It has been recently discussed on the Debian-Women and Debian-Games mailing lists, but for all of you who don't read those mailing lists and might have kids, or use free games with kids in the classroom, or things like that, I thought it might be good to talk about it here. SuperTuxKart is a free 3D kart racing game, similar to Mario Kart, with a focus on having fun over realism. The characters in the game are the mascots of free and open source projects, except for Nolok, who does not represent a particular open source project, but was created by the SuperTux Game Team as the enemy of Tux. On April 21, 2015, version 0.9 (not yet in Debian) was released, which uses the Antarctica graphics engine (a derivative of Irrlicht) and enables better graphics and features such as dynamic lighting, ambient occlusion, depth of field, and global illumination. Along with this new engine comes a poster with a sexualized white woman wearing an outfit that can be described as a mix of Native American clothes from different nations and a halo of feathers, as well as many models of her in a bikini swimsuit throughout the game, even in the hall of the airport. They say an image is worth more than a thousand words, don't they?

14 April 2015

Mario Lang: Bjarne Stroustrup talking about organisations that can raise expectations

At time index 22:35 of this video, Bjarne Stroustrup explains what he thinks is very special about organisations like Cambridge or Bell Labs. When I heard him explain this, I couldn't help but think of Debian. This is exactly how I felt (and actually still do) when I joined Debian as a Developer in 2002. This is, among other things, what makes Debian very special to me. If you don't want to watch the video, here is the excerpt I am talking about:
One of the things that Cambridge could do, and later Bell Labs could do, is somehow raise people's expectations of themselves. Raise the level that is considered acceptable. You walk in and you see what people are doing, you see how people are doing, you see how apparently easily they do it, and you see how nice they are while doing it, and you realize, I better sharpen up my game. This is something where you have to, you just have to get better. Because what is acceptable has changed. And some organisations can do that, and well, most can't, to that extent. And I am very, very lucky to be in a couple of places that actually can increase your level of ambition, in some sense, your level of what is a good standard.

9 April 2015

Mario Lang: A C++ sample collection

I am one of those people who learn best by looking at examples, no matter whether I am trying to learn a programming pattern/idiom or a completely new library or framework. Documentation is good (if it is good!) for diving into the details, but to get me started, I always want to look at a self-contained example so that I can get a picture of the thing in my head. So I was very excited when, a few days ago, CppSamples was announced on the ISO C++ Blog. While it is a very young site, it already contains some very useful gems. It is maintained over at GitHub, so it is also rather easy to suggest new additions, or to improve the existing examples by submitting a pull request. Give it a try, it is really quite nice. In my book, the best resource I have found so far in 2015. BTW, Debian has a standard location for finding examples provided by a package: /usr/share/doc/<package>/examples/. I consider that very useful.

7 April 2015

Mario Lang: I am sorry, but this looks insane

I am a console user. I really just started to use X11 again about two weeks ago, to occasionally test a Qt application I am developing. I am not using Firefox or anything similar; all my daily work happens in shells and inside of emacs, in a console, not in X11. BRLTTY runs all the time, translating the screen content to something that my braille display can understand, sent out via USB. So the most important programs to me are really emacs and brltty. This is my desktop, which has been up for 179 days.
PID   USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1227 message+  20   0    7140   2860    672 S   0,0  0,1 153:33.10 dbus-daemon
21457 root      20   0   44456   1116    788 S   0,0  0,1 146:42.47 packagekitd
    1 root      20   0   24348   2808   1328 S   0,0  0,1 109:16.99 systemd
 7897 mlang     20   0  585776 121656   4188 S   0,0  6,0 105:22.40 emacs
13332 root      20   0   10744    432    220 S   0,0  0,0  91:55.96 ssh
19581 root      20   0    4924   1632   1076 S   0,0  0,1  53:33.56 systemd
19596 root      20   0   20312   9764   9660 S   0,0  0,5  48:10.76 systemd-journal
10172 root      20   0   85308   2472   1672 S   0,0  0,1  20:30.18 NetworkManager
   29 root      20   0       0      0      0 S   0,0  0,0  18:40.24 kswapd0
13334 root      20   0  120564   5748    304 S   0,0  0,3  16:20.89 sshfs
    7 root      20   0       0      0      0 S   0,0  0,0  15:21.15 rcu_sched
14245 root      20   0    7620    316    152 S   0,0  0,0  15:08.64 ssh
  438 root      20   0       0      0      0 S   0,0  0,0  12:14.80 jbd2/dm-1-8
11952 root      10 -10   42968   2028   1420 S   0,0  0,1  10:36.20 brltty
I am sorry, but this doesn't look right, not at all. I am not even beginning to talk about dbus-daemon and systemd. Why the HECK does packagekitd (which I definitely don't use actively) use up more than two hours of plain CPU time? What did it do, talk to the NSA via an asymmetric cipher, or what?! I play music via sshfs, sometimes FLAC files. That barely consumed more CPU time than brltty, which is probably the most active daemon on my system, erm, it should be. I don't want to chime into any flamewars. I have accepted that we have systemd. But this does not look right! I remember, back in the good old days, emacs and brltty were my top CPU users.

23 March 2015

Mario Lang: Why is Qt5 not displaying Braille?

While evaluating the cross-platform accessibility of Qt5, I stumbled across this deficiency:
#include <QApplication>
#include <QTextEdit>
int main(int argv, char **args)
{
  QApplication app(argv, args);
  QTextEdit textEdit;
  textEdit.setText(u8"\u28FF");
  textEdit.show();
  return app.exec();
}
(compile with -std=c++11). On my system, this "application" does not always show the correct glyph. Sometimes it renders a white square with a black border, i.e., the symbol for an unknown glyph. However, if I invoke the same executable several times, sometimes it renders the glyph correctly. In other words: the glyph-choosing mechanism is apparently non-deterministic!!! UPDATE: Sune Vuorela figured out that I need to set QT_HARFBUZZ=old in the environment for this bug to go away. Apparently, harfbuzz-ng from Qt 5.3 is buggy.
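For what it's worth, here is a minimal sketch (my own illustration, not verified against Qt 5.3) of applying the same workaround from inside the program by calling qputenv() before the QApplication object is constructed; running the unmodified binary with QT_HARFBUZZ=old set in the shell should be equivalent:
#include <QApplication>
#include <QTextEdit>
#include <QtGlobal>
int main(int argc, char **argv)
{
  // Workaround sketch: select the old HarfBuzz shaper before the font
  // machinery is initialised (same effect as running with QT_HARFBUZZ=old).
  qputenv("QT_HARFBUZZ", "old");
  QApplication app(argc, argv);
  QTextEdit textEdit;
  textEdit.setText(u8"\u28FF"); // U+28FF BRAILLE PATTERN DOTS-12345678
  textEdit.show();
  return app.exec();
}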

18 March 2015

Mario Lang: Call for Help: BMC -- Braille Music Compiler

Since 2009, I have been pursuing a personal programming project. As I am not a professional programmer, I have spent quite a lot of that time exploring options. I have thrown out about three or four prototype implementations already. My latest implementation seems to contain enough accumulated wisdom to be actually useful. I am far from finished, but the path I am walking now seems relatively sound. So, what is this project about? I have set myself a rather ambitious goal: I am trying to implement a two-way bridge between visual music notation and braille music code. It is called BMC (Braille Music Compiler). My problem: I am, as some of you might remember, 100% blind. So I am trying to write a translator between something I will never see directly and its counterpart representation in a tactile encoding I had to learn from scratch to be able to work on this project. Braille music code is probably the most cryptic thing I have ever tried to learn. It is basically a method to represent a 2-dimensional structure like staff notation as a stream of characters encoded in 6-dot braille. As the goal above states, I am ultimately trying to implement a converter that works both ways. One of my prototypes already implemented reading digital staff notation (MusicXML) and transcribing it to braille. However, to be able to actually understand all the concepts involved, I ended up starting from the other end of the spectrum with my new implementation: parsing braille music code and emitting digital staff notation (LilyPond and MusicXML). This is a rather unique feature, since while there is commercial (and very expensive) software out there to convert MusicXML to braille music code, there is, as far as I know, no system that allows you to input un-annotated braille music code and have it automatically converted to sighted music notation. So the current state of things is that we are able to read certain braille music code formats and output either reformatted (to a new line width) braille music code, LilyPond or MusicXML. The ultimate goal is to also implement a MusicXML reader and convert the data to something that can be output as braille music code. While the initial description might not sound very hard, there are a lot of complications arising from how braille music code works, which make this quite a programming challenge. For one, braille music note and rest values are ambiguous. A braille music note or rest that looks like a whole can mean a whole or a 16th. A braille music note or rest that looks like a half can mean a half or a 32nd. And so on. So each braille music code value can have two meanings. The actual value can be calculated with a recursive algorithm that I have worked out from scratch over the years. The original implementation was inspired by Samuel Thibault (thanks!) and has since then evolved into something that does what we need, while trying to do that very fast. Most input documents can be processed in almost no time; however, time signatures with a value > 1 (such as 12/8) tend to make the number of possible choices explode quite heavily. I have found so far one piece from J.S. Bach (BWV 988, Variation 3) which takes about 1.5s on my 3GHz AMD (and the code is already using several CPU cores). Additionally, braille music code supports a form of "micro"-repetition which is not present in visual staff notation and which effectively allows certain musical patterns to be compressed when represented in braille.
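To give a feeling for the disambiguation problem, here is a toy sketch (my own illustration, not BMC's actual algorithm; the choice of 128ths as the duration unit is just an assumption for the example) that recursively tries both readings of every value sign until the measure adds up to the time signature:
#include <cstddef>
#include <initializer_list>
#include <iostream>
#include <vector>
// Toy sketch, not BMC's actual algorithm.
// Each braille value sign stands for one of two durations, here in 128ths:
// "whole or 16th" = {128, 8}, "half or 32nd" = {64, 4}, and so on.
struct ambiguous_value { int large, small; };
// Assign a concrete duration to every sign so that the measure sums up to
// exactly `remaining` 128ths.  Returns true and fills `result` on success.
bool resolve(std::vector<ambiguous_value> const &signs, std::size_t index,
             int remaining, std::vector<int> &result)
{
  if (index == signs.size()) return remaining == 0;
  for (int candidate: {signs[index].large, signs[index].small}) {
    if (candidate <= remaining) {
      result[index] = candidate;
      if (resolve(signs, index + 1, remaining - candidate, result))
        return true;
    }
  }
  return false;
}
int main()
{
  // A 4/4 measure (128 units) written as sixteen signs that each read as
  // "whole or 16th".  The whole-note reading of the first sign fills the
  // measure but leaves no room for the rest, so the search backtracks and
  // settles on sixteen 16ths.
  std::vector<ambiguous_value> measure(16, {128, 8});
  std::vector<int> values(measure.size());
  if (resolve(measure, 0, 128, values)) {
    for (int value: values) std::cout << value << ' ';
    std::cout << '\n';
  }
  return 0;
}
A real measure mixes different sign types, and larger time signatures multiply the number of branches, which is where the combinatorial explosion mentioned above comes from.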
Another algorithmically interesting part of BMC that I have started to tackle just recently is the line-breaking problem. Braille music code has some peculiar rules when it comes to breaking a measure of musical material into several lines. I ended up adapting Donald E. Knuth's algorithm from "Breaking Paragraphs into Lines" for fixed-width text. In other words, I am ignoring the stretch/shrink factors, while making use of different penalty values to find the perfect solution for the problem of breaking a paragraph of braille music code into several lines. One thing that I have learnt from my previous prototype (which was apparently useful enough to already acquire some users) is that it is not enough to just transcribe one format to another. I ultimately want to store meta-information about the braille that is presented to the user, such that I can implement interactive querying and editing features. Braille music code is complicated, and one of the original motivations to work on software to deal with it was to ease the learning curve. A user of BMC should be able to ask the system for a description of a character at a certain position. The user interface (not implemented yet) should allow playing a certain note interactively, playing the measure under the cursor, or playing the whole document, and if possible, have the cursor scroll along during playback. These features are not implemented in BMC yet, but they have been implemented in the previous prototype, and their usefulness is apparent. Also, when viewing a MusicXML document in braille music code, certain non-structural changes like adding/removing fingering annotations should be possible while preserving unhandled features of the original MusicXML document. This has also been implemented in the previous prototype, and is a goal for BMC.
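For the curious, here is a rough sketch of the fixed-width idea (again my own illustration, not BMC's code): a dynamic program that chooses break points for a sequence of fixed-width items by minimising a penalty on the unused space of every line except the last, with the stretch/shrink machinery left out:
#include <cstddef>
#include <iostream>
#include <limits>
#include <string>
#include <vector>
int main()
{
  // Simplified illustration; in BMC the items would be braille music symbols.
  std::vector<std::string> const items {
    "braille", "music", "code", "is", "broken", "into", "lines"
  };
  std::size_t const width = 16;          // line width in cells
  long const infinity = std::numeric_limits<long>::max() / 2;
  std::size_t const n = items.size();
  // best[i] is the minimal penalty for laying out items[i..n);
  // break_after[i] is one past the last item on the line starting at i.
  std::vector<long> best(n + 1, 0);
  std::vector<std::size_t> break_after(n + 1, n);
  for (std::size_t i = n; i-- > 0;) {
    best[i] = infinity;
    std::size_t length = 0;
    for (std::size_t j = i; j < n; ++j) {
      length += items[j].size() + (j > i ? 1 : 0); // one space between items
      if (length > width) break;                   // assumes every item fits a line
      long const slack = static_cast<long>(width - length);
      long const penalty = (j + 1 == n) ? 0 : slack * slack; // last line is free
      if (best[j + 1] < infinity && penalty + best[j + 1] < best[i]) {
        best[i] = penalty + best[j + 1];
        break_after[i] = j + 1;
      }
    }
  }
  // Emit the lines chosen by the optimal break points.
  for (std::size_t i = 0; i < n; i = break_after[i]) {
    for (std::size_t j = i; j < break_after[i]; ++j)
      std::cout << (j > i ? " " : "") << items[j];
    std::cout << '\n';
  }
  return 0;
}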
I need your help
The reason why I am explaining all of this here is that I need your help for this project to succeed. Helping the blind to more easily work with traditional music notation is a worthwhile goal to pursue. There is no free system around that really tries to adhere to the braille music code standard and aims to cover converting both ways. I have reached a level of conformance that surpasses every implementation of the same problem that I have seen so far on the net. However, the primary audience of this software is going to be using Windows. We desperately need a port to that OS, and a user interface resembling Notepad with a lot fewer menu entries. We also need a GTK interface that does the same thing on Linux. wxWidgets is unfortunately out of the question, since it does not provide the same level of accessibility on all the platforms it supports. Ideally, we'd also have a Cocoa interface for OS X. I am afraid there is no platform-independent GUI framework that offers the same level of accessibility on all supported platforms. And since much of our audience is going to rely on working accessibility, it looks like we need to implement three user interfaces to achieve this goal :-(. I also desperately need code reviews and inspiration from fellow programmers. BMC is a C++11 project making heavy use of Boost. If you are into any of these things, please give it a whirl, and submit pull requests, no matter how small they are. While I have learnt a lot in the last years, I am sure there are many places that could use some fresh winds of thought from people who are not me. I am suffering from what I call "the lone coder syndrome". I also need (technical) writers to help me complete the pieces of documentation that are already lying around. I have started to write a braille music tutorial based on the underlying capabilities of BMC. In other words, the tutorial includes examples which are typeset in braille and staff notation, using LilyPond as a rendering engine. However, something like a user manual is missing, basically because the user interface is missing. BMC is currently "just" a command-line tool (good enough for me) that transcribes input files to STDOUT. This is very good for testing the backend, which is all that has been important to me in the last years. However, BMC has reached a stage now where its functionality is likely useful enough to be exposed to users. While I try to improve things as steadily as I can, I realize that I really need to put out this call for help to make any useful progress in a foreseeable time. If you think it is a worthwhile goal to help the blind to more easily work with music notation, and also to enable communication between blind and sighted musicians in both directions, please take the time and consider how you could help this project advance. My email address can be found on my GitHub page. Oh, and while you are over at GitHub, make sure to star BMC if you think it is a nice project. It would be nice if we could produce an end-user oriented release before the end of this year.

22 December 2014

Michael Prokop: Ten years of Grml

On 22nd of October 2004 an event called OS04 took place in Seifenfabrik Graz/Austria, and it marked the first official release of the Grml project. Grml was initially started by myself in 2003; I registered the domain on September 16, 2003 (so technically it would be 11 years already :)). It started with a boot disk, first created by hand and then based on yard. On 4th of October 2004 we had a first presentation of grml 0.09 Codename Bughunter at Kunstlabor in Graz. I managed to talk a good friend and fellow student, Martin Hecher, into joining me. Soon after, Michael Gebetsroither and Andreas Gredler joined, and throughout the upcoming years further team members (Nico Golde, Daniel K. Gebhart, Mario Lang, Gerfried Fuchs, Matthias Kopfermann, Wolfgang Scheicher, Julius Plenz, Tobias Klauser, Marcel Wichern, Alexander Wirt, Timo Boettcher, Ulrich Dangel, Frank Terbeck, Alexander Steinböck, Christian Hofstaedtler) and contributors (Hermann Thomas, Andreas Krennmair, Sven Guckes, Jogi Hofmüller, Moritz Augsburger, ...) joined our efforts. Back in those days most efforts went into hardware detection, loading and setting up the according drivers and configurations, packaging software and fighting bugs with lots of reboots (working on our custom /linuxrc for the initrd wasn't always fun). Throughout the years virtualization became more broadly available, which is especially great for most of the testing you need to do when working on your own (meta) distribution. Once upon a time udev became available and solved most of the hardware detection issues for us. Nowadays X.org doesn't even need an xorg.conf file anymore (at least by default). We have to acknowledge that Linux grew up over the years quite a bit (and I'm wondering how we'll look back at the systemd discussions in a few years). By having Debian Developers within the team we managed to push quite some of our work back to Debian (the distribution Grml was and still is based on), years before the Debian Derivatives initiative appeared. We never stopped contributing to Debian, though, and we also still benefit from the Debian Derivatives initiative, like sharing issues and ideas at DebConf meetings. On 28th of May 2009 I myself became an official Debian Developer. Over the years we moved from private self-hosted infrastructure to company-sponsored systems, and migrated from Subversion (brr) to Mercurial (2006) to Git (2008). Our Zsh-related work became widely known as grml-zshrc. jenkins.grml.org managed to become a continuous integration/deployment/delivery home, e.g. for the dpkg, fai, initramfs-tools, screen and zsh Debian packages. The underlying software for creating Debian packages in a CI/CD way became its own project, known as jenkins-debian-glue, in August 2011. In 2006 I started grml-debootstrap, which grew into a reliable method for installing plain Debian (nowadays even supporting installation as a VM, and one of my customers does tens of deployments per day with grml-debootstrap in a fully automated fashion). So one of the biggest achievements of Grml is, from my point of view, that it managed to grow several active and successful sub-projects under its umbrella. Nowadays the Grml team consists of 3 Debian Developers: Alexander Wirt (formorer), Evgeni Golov (Zhenech) and myself.
We couldn't talk Frank Terbeck (ft) into becoming a DM/DD (yet?), but he's an active part of our Grml team nonetheless and does a terrific job maintaining grml-zshrc as well as helping out in Debian's Zsh packaging (and being a Zsh upstream committer at the same time makes all of that even better :)). My personal conclusion for 10 years of Grml? Back in the days when I was a student, Grml was my main personal pet project and hobby. Grml grew into an open source project which wasn't known just in Graz/Austria, but especially throughout the German system administration scene. Since 2008 I have been self-employed, mainly working on open source stuff, so I'm kind of living a dream I didn't even have when I started with Grml in 2003. Nowadays, running my own business and having my own family, it's getting harder for me to still consider it a hobby; instead it's more integrated into and part of my business, which I personally consider both good and bad at the same time (for various reasons). Thanks so much to everyone who was (and possibly still is) part of the Grml journey! Let's hope for another 10 successful years! Thanks to Max Amanshauser and Christian Hofstaedtler for reading drafts of this.

18 December 2014

Mario Lang: deluXbreed #2 is out!

The third installment of my crossbreed digital mix podcast is out! This time, I am featuring Harder & Louder and tracks from Behind the Machine and the recently released Remixes.
  1. Apolloud - Nagazaki
  2. Apolloud - Hiroshima
  3. SA+AN - Darksiders
  4. Im Colapsed - Cleaning 8
  5. Micromakine & Switch Technique - Ascension
  6. Micromakine - Cyberman (Dither Remix)
  7. Micromakine - So Good! (Synapse Remix)
How was DarkCast born and how is it done? I have always loved 175 BPM music. It is an old thing that is not going away soon :-). I recently found that there is quite an active culture going on, at least on Bandcamp. But single tracks are just that, and not really fun to listen to on their own, in my opinion. This sort of music needs to be mixed to be fun. In the past, I used to have most tracks I like/love on vinyl, so I did some real-world vinyl mixing myself. But these days, most fun music is only available digitally, at least easily. Some people still do vinyl releases, but they are actually rare. So for my personal enjoyment, I started to digitally mix tracks I really love, such that I can listen to them without "interruption". And since I have been an iOS user for three years now, using the podcast format to get stuff onto my devices was quite a natural choice. I use SoX and a very small shell script to create these mixes. Here is a pseudo-template:
sox --combine mix-power \
"|sox \"|sox 1.flac -p\" \"|sox 3.flac -p speed 0.987 delay 2:28.31 2:28.31\" -p" \
"|sox \"|sox 2.flac -p delay 2:34.1 2:34.1\" -p" \
mix.flac
As you can imagine, it is quite a bit of fiddling to get these scripts to do what you want. But it is a non-graphical method to get things done. If you know of a better tool, possibly with a bit of real-time control, to get the same job done without having to resort to a damn GUI, let me know.

14 December 2014

Mario Lang: Data-binding MusicXML

My long-term free software project (Braille Music Compiler) has just produced some offspring! xsdcxx-musicxml is now available on GitHub. I used CodeSynthesis XSD to generate a rather complete object model for MusicXML 3.0 documents. Some of the classes needed a bit of manual adjustment to make the client API really nice and tidy. During the process, I have learnt (as is almost always the case when programming) quite a lot. I have to say, once you get the hang of it, CodeSynthesis XSD is really a very powerful tool. I definitely prefer having these 100k lines of code auto-generated from an XML Schema, instead of having to implement small parts of it by hand. If you are into MusicXML for any reason, and you like C++, give this library a whirl. At least to me, it is what I was always looking for: rather type-safe, with a quite self-explanatory API. For added ease of integration, xsdcxx-musicxml is sub-project friendly. In other words, if your project uses CMake and Git, adding xsdcxx-musicxml as a subproject is as easy as using git submodule add and putting add_subdirectory(xsdcxx-musicxml) into your CMakeLists.txt. Finally, if you want to see how this library can be put to use: the MusicXML export functionality of BMC is all in one C++ source file: musicxml.cpp.

24 October 2014

Stefano Zacchiroli: Italy puts Free Software first in public sector

Debian participation in Italy's CAD68 committee
(The initial policy change discussed in this document is a couple of years old now, but it took about the same time to be fully implemented, and AFAIK the role Debian played in it has not been documented yet.) In October 2012 the Italian government, led at the time by Mario Monti, did something rather innovative, at least for a country that is not usually ahead of its time in the area of information technology legislation. They decided to change the main law (the "CAD", for Codice dell'Amministrazione Digitale) that regulates the acquisition of software at all levels of the public administration (PA), giving an explicit preference to the acquisition of Free Software. The new formulation of article 68 of the CAD first lists some macro criteria (e.g., TCO, adherence to open standards, security support, etc.) that public administrations in Italy shall use as ranking criteria in software-related calls for tenders. Then, and this is the most important part, the article affirms that the acquisition of proprietary software solutions is permitted only if it is impossible to choose Free Software solutions instead, or, alternatively, software solutions that have already been acquired (and paid for) by the PA in the past, reusing preexisting software. The combined effect of these two provisions is that all new software acquisitions by PAs in Italy will be Free Software, unless it is motivated in writing, in a way that can be challenged before a judge, that it was impossible to do otherwise. Isn't it great? It is, except that such a law is not necessarily easy to adhere to in practice, especially for small public administrations (e.g., municipalities of a few hundred people, not uncommon in Italy) which might have very little clue about software in general, and even less so about Free Software. This is why the government also tasked the relevant Italian agency with providing guidelines on how to choose software in a way that conforms with the new formulation of article 68. The agency decided to form a committee to work on the guidelines (because you always need a committee, right? :-) ). To my surprise, the call for participation to be part of the committee explicitly listed representatives of Free Software communities as privileged software stakeholders that they wanted to have on the committee; kudos to the agency for that. (The Italian wording of the call was: "Costituirà titolo di preferenza rivestire un ruolo di [...] referenti di community del software a codice sorgente aperto.") Therefore, after various prods by fellow European Free Software activists who were aware of the ongoing change in legislation, I applied to be a volunteer CAD68 committee member, got selected, and ended up working over a period of about 6 months (March-September 2013) to help the agency write the new software acquisition guidelines. Logistically, it hasn't been entirely trivial, as the default meeting place was in Rome, I live in Paris, and the agency didn't really have a travel budget for committee members. That's why I sought sponsorship from Debian, offering to represent Debian views within the committee; Lucas kindly agreed to my request. So what did I do on behalf of Debian as a committee member during those months? Most of my job has been some sort of consulting on how community-driven Free Software projects like Debian work, on how the software they produce can be relied upon and contributed to, and more generally on how the PA can productively interact with such projects.
In particular, I've been happy to work on the related work section of the guidelines, ensuring they point to relevant documents such as the French government guidelines on how to adopt Free Software (AKA circulaire Ayrault). I've also drafted the guidelines section on Free Software directories, ensuring that important resources such as the FSF's Free Software Directory are listed as starting points for PAs looking for software solutions for specific needs. Another part of my job has been ensuring that the guidelines do not end up betraying the principle of Free Software preference that is embodied in article 68. A majority of committee members came from a Free Software background, so that might not seem a difficult goal to accomplish. But it is important to notice that: (a) the final editor of the guidelines is the agency itself, not the committee, so having a "pro-Free Software" majority within the committee doesn't mean much per se; and (b) lobbying from the "pro-proprietary software" camp did happen, as is entirely natural in these cases. In this respect I'm happy with the result: I do believe that the software selection process recommended by the guidelines, finally published in January 2014, upholds the Free Software preference principle of article 68. I credit both the agency and the non-ambiguity of the law (on this specific point) for that result. All in all, this has been a positive experience for me. It has reaffirmed my belief that Debian is a respected, non-partisan political actor of the wider software/ICT ecosystem. This experience has also given me a chance to be part of country-level policy-making, which has been very instructive on how and why good ideas might take a while to come into effect and influence citizens' lives. Speaking of which, I'm now looking forward to the first alleged violations of article 68 in Italy, and how they will be dealt with. Abundant popcorn will certainly be needed.
Links & press
If you want to know more about this topic, I've collected below links to resources that have documented, in various languages, the publication of the CAD68 guidelines.

12 October 2014

Mario Lang: soundCLI works again

I recently ranted about my frustration with GStreamer in a SoundCloud command-line client written in Ruby. Well, it turns out that there was quite a bit of confusion going on. I still haven't figured out why my initial tries resulted in an error regarding $DISPLAY not being set. But now that I have played a bit with gst-launch-1.0, I can positively confirm that this was very likely not the fault of GStreamer. The actual issue is that ruby-gstreamer assumes gstreamer-1.0, while soundCLI was still written against the gstreamer-0.10 API. Since the Ruby gst module doesn't have the GStreamer API version in its name, and since Ruby is a dynamic language that only detects most errors at runtime, this led to all sorts of cascaded errors. It turns out I only had to correct the use of query_position, query_duration, and get_state, as well as switch from playbin2 to playbin. soundCLI is now running in the background and playing my SoundCloud stream. A pull request against soundCLI has also been opened. On a somewhat related note, I found a GCC bug (ICE SIGSEGV) this weekend. My first one. It is related to C++11 bracketed initializers. Given that I have heard GCC 5.0 aims to remove the experimental nature of its C++11 (and maybe also C++14) support, this seems like a good time to hit this one. I guess that means I should finally isolate the C++ regex (runtime) segfault I recently stumbled across.

10 October 2014

Mario Lang

GStreamer and the command-line?
I was recently looking for a command-line client for SoundCloud. soundCLI on GitHub appeared to be what I wanted. But wait, there is a problem with its implementation. soundCLI uses GStreamer's playbin2 to play audio data. But that apparently requires $DISPLAY to be set. So no, soundCLI is not a command-line client. It is a command-line client for X11 users. Ahem. A bit of research on Stack Overflow and related sites did not tell me how to modify playbin2 usage such that it does not require X11 while it is only playing AUDIO data. What the HECK is going on here? Are the graphical people trying to silently take over the world? Is Linux becoming the new Windows? The distinction between CLI and GUI has become more and more blurry in recent years. I fear for my beloved platform. If you know how to patch soundCLI to not require X11, please let me know. My current work-around is to replace all GStreamer usage with a simple "system" call to VLC. That works, but it does not give me comment display (since soundCLI doesn't know the playback position anymore) and it hangs after every track, requiring me to enter "quit" manually at the VLC prompt. I really would have liked to use mplayer2 for this, but alas, mplayer2 does not support https. Oh well, why would it need to, in this day and age where everyone seems to switch to https by default.

30 September 2014

Mario Lang: A simple C++11 concurrent workqueue

For a little toy project of mine (a Wikipedia XML dump word counter) I wrote a little C++11 helper class to distribute work to all available CPU cores. It took me many years to overcome my fear of threading: in the past, whenever I toyed with threaded code, I ended up having a lot of deadlocks and generally being confused. It appears that I have finally understood enough of this craziness to be able to come up with the small helper class below.
The problem
We want to spread work amongst all available CPU cores. There are no dependencies between items in our work queue. So every thread can just pick up and process an item as soon as it is ready.
The solution
This simple implementation makes use of C++11 threading primitives, lambda functions and move semantics. The idea is simple: you provide a function at construction time which defines how to process one item of work. To pass work to the queue, simply call the function operator of the object, repeatedly. When the destructor is called (once the object reaches the end of its scope), all remaining items are processed and all background threads are joined. The number of threads defaults to the value of std::thread::hardware_concurrency(). This appears to work at least since GCC 4.9. Earlier tests had shown that std::thread::hardware_concurrency() always returned 1. I don't know when exactly GCC (or libstdc++, actually) started to support this, but at least since GCC 4.9, it is usable. A prerequisite on Linux is a mounted /proc. The maximum number of queued items per thread defaults to 1. If the queue is full, calls to the function operator will block. So the most basic usage example is probably something like:
int main() {
  typedef std::string item_type;
  distributor<item_type> process([](item_type &item) {
    // do work
  });
  while (/* input */) process(std::move(/* item */));
  return 0;
}
That is about as simple as it can get, IMHO. The code can be found in the GitHub project mentioned above. However, since the class template is relatively short, here it is.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <stdexcept>
#include <thread>
#include <vector>
template <typename Type, typename Queue = std::queue<Type>>
class distributor: Queue, std::mutex, std::condition_variable {
  typename Queue::size_type capacity;
  bool done = false;
  std::vector<std::thread> threads;
public:
  template<typename Function>
  distributor( Function function
             , unsigned int concurrency = std::thread::hardware_concurrency()
             , typename Queue::size_type max_items_per_thread = 1
             )
  : capacity{concurrency * max_items_per_thread}
  {
    if (not concurrency)
      throw std::invalid_argument("Concurrency must be non-zero");
    if (not max_items_per_thread)
      throw std::invalid_argument("Max items per thread must be non-zero");
    for (unsigned int count {0}; count < concurrency; count += 1)
      threads.emplace_back(static_cast<void (distributor::*)(Function)>
                           (&distributor::consume), this, function);
  }
  distributor(distributor &&) = default;
  distributor &operator=(distributor &&) = delete;
  ~distributor()
  {
    {
      std::lock_guard<std::mutex> guard(*this);
      done = true;
      notify_all();
    }
    for (auto &&thread: threads) thread.join();
  }
  void operator()(Type &&value)
  {
    std::unique_lock<std::mutex> lock(*this);
    while (Queue::size() == capacity) wait(lock);
    Queue::push(std::forward<Type>(value));
    notify_one();
  }
private:
  template <typename Function>
  void consume(Function process)
  {
    std::unique_lock<std::mutex> lock(*this);
    while (true) {
      if (not Queue::empty()) {
        Type item { std::move(Queue::front()) };
        Queue::pop();
        notify_one();
        lock.unlock();
        process(item);
        lock.lock();
      } else if (done) {
        break;
      } else {
        wait(lock);
      }
    }
  }
};
If you have any comments regarding the implementation, please drop me a mail.

2 September 2014

Mario Lang: exercism.io C++ track

exercism.io is a crowd-sourced mentorship platform for learning to program. In my opinion, they do a lot of things right. In particular, an exercise on exercism.io consists of a descriptive README file and a set of test cases implemented in the target programming language. The tests have two positive sides: you learn to do test-driven development, which is good, and you also have an automated validation suite. Of course, a test cannot give you feedback on your actual implementation, but at least it can give you an idea of whether you have managed to implement what was required of you. But that is not the end of it. Once you have submitted a solution to a particular exercise, other users of exercism.io can comment on your implementation. And you can, as soon as you have submitted your first implementation, look at the solutions that other people have submitted to that particular problem. So knowledge transfer can happen both ways from there on: you can learn new things from how other people have solved the same problem, and you can also tell other people about things they might have done in a different way. These comments are, somewhat appropriately, called nitpicks on exercism.io. Now, exercism has recently gained a C++ track. That track is particularly fun, because it is based on C++11, Boost, and CMake, things that are quite standard in C++ development these days. And the use of C++11 and Boost makes some solutions really shine.

17 August 2014

Francesca Ciceri: Adventures in Mozillaland #4

Yet another update from my internship at Mozilla, as part of the OPW.
An online triage workshop
One of the most interesting things I've done during the last weeks has been to hold an online Bug Triage Workshop on the #testday channel at irc.mozilla.org.
That was a first time for me: I had been a moderator for a series of training sessions on IRC organized by Debian Women, but never a "speaker".
The experience turned out to be a good one: creating the material for the workshop had me basically summarize (not too much, I'm way too verbose!) everything I've learned in these past months about triaging in Mozilla, and speaking about it on IRC was a sort of challenge to my usual shyness. And I was so very lucky that a participant was able to reproduce the bug I picked as an example, thus confirming it! How cool is that? ;) The workshop was about the very basics of triaging for Firefox, and we mostly focused on a simplified lifecycle of bugs, a guided tour of Bugzilla (including the quicksearch and the advanced one, the list view, and the individual bug view) and an explanation of the workflow of the triager. I still have my notes, and I plan to upload them to the wiki, sooner or later. I'm pretty satisfied with the outcome: the only regret is that the promotion wasn't enough, so we had few participants.
Will try to promote it better next time! :)
about:crashes
Another thing that kept me quite busy in the last weeks was learning more about crashes and stability in general.
If you are unfortunate enough to experience a crash with Firefox, you're probably familiar with the Mozilla Crash Reporter dialog box asking you to submit the crash report. But how does it work? On the client side, Mozilla uses Breakpad as a set of libraries for crash reporting. The Mozilla-specific implementation adds to that a crash-reporting UI, a server to collect and process crash report data (and particularly to convert raw dumps into readable stack traces) and a web interface, Socorro, to view and parse crash reports. Curious about your crashes? The about:crashes page will show you a list of the submitted and unsubmitted crash reports. (And by the way, try to type about:about in the location bar, to find all the super-secret about pages!) For the submitted ones, clicking on the CrashID will take you to the crash report on crash-stats, the website where the reports are stored and analyzed. The individual crash report page on crash-stats is awesome: it shows you the reported bug numbers if any bug summaries match the crash signature, as well as lots of other information. If crash-stats does not show a bug number, you really should file one! The CrashKill team works on these reports, tracking the general stability of the various channels, triaging the top crashes, and ensuring that the crash bugs have enough information and are reproducible and actionable by the devs.
The crash-stats site is a mine of information: take a look at the Top Crashes for Firefox 34.0a1.
If you click on an individual crash, you will see lots of details about it: just on the first tab ("Signature Summary") you can find a breakdown of the crashes by OS, by graphics vendor or chip, or even by uptime range.
A very useful one is the number of crashes per install, so that you know how widespread the crashing is for that particular signature. You can also check the comments the users have submitted with the crash report, on the "Comments" tab.
One and Done tasks review
Last week I helped the awesome group of One and Done developers, doing some reviewing of the tasks pages. One and Done is a brilliant idea to help people contribute to the Mozilla QA teams.
It's a website proposing to the user a series of tasks of different difficulty and on different topics to contribute to Mozilla. Each task is self-contained and can take a few minutes or be a bit more challenging. The team has worked hard on developing it and they have definitely done an awesome job! :) I'm not a coding person, so I just know that they're using Django for it, but if you are interested in all the dirty details take a look at the project repository. My job has been only to check all the existing tasks and verify that the descriptions and instructions are correct, that the tasks are properly tagged and so on. My impression is that this is an awesome tool, well written and well thought out, with a lot of potential for helping people in their first steps into Mozilla. Something that other projects should definitely imitate (cough Debian cough).
What's next?
Next week I'll be back to working on bugs. I kind of love bugs, I have to admit it. And not squashing them: not being a coder makes me less of a violent person toward digital insects. Herding them is enough for me. I'm feeling extremely non-violent toward bugs. I'll try to help Liz with the Test Plan for Firefox 34, on the triaging/verifying bugs part.
I'll also try to triage/reproduce some accessibility bugs (thanks Mario for the suggestion!).

18 July 2014

Mario Lang: Crowdsourced accessibility: Self-made digital menus

Something straight out of the real world: menu cards in restaurants are not nice to deal with if you are blind. It is an old problem we grow used to ignoring over time, but still something that can be quite nagging. There are a lot of psychological issues involved in this one. Of course, you can ask for the menu to be read out to you by the staff. While they usually do their best, you end up missing out on some things most of the time. First of all, depending on the current workload in the restaurant, the staff will usually try to save some time and not read everything to you. What they usually do is try to understand what type of meal you are interested in, and just read the choices from that category to you. While this can be considered a service in some situations (human preprocessing), there are situations where you will definitely miss a highlight on the menu that you would have liked to choose if you had known it was there. And even if the staff decides to read the complete menu to you (which is rare), you are confronted with the 7-things-in-my-head-at-once problem. It is usually rather hard to decide amongst a list of more than 7 items, because our short-term memory is sort of limited. What sighted restaurant-goers do is skip back and forth between the available options until they hit a decisive moment. True, that can take a while, but it is definitely a lot easier if you can perform "random access reads" of the list of choices yourself. However, if someone presents a substantial number of choices to you in a row, as sequential speech, you lose the random access ability. You either remember every choice from the beginning and do your choosing mentally (if you do have extraordinary mental abilities), or you end up asking the staff to read previous items aloud again. This can work, but usually it doesn't. At some point, you do not want to bother the staff anymore, and you even start to feel stupid for asking again and again, while this is something totally normal for every sighted person, just that "they" do their "random access browsing" on their own, so "they" have no need to feel bad about how long it takes them to decide, minus the typical social pressure that arises after a certain time for everyone, at least if you are dining in a group. In very rare cases, you happen to meet staff that is truly "awake", doing their best to not let you feel that they might be pressed for time, and really taking as much time as necessary to help you make the perfect decision. This is rare, but if it happens, it is almost a magical moment: one of those moments where there are no "artificial" barriers between humans communicating. Anyway, I am drifting away. The perfect solution to this problem is to provide random access browsing of a restaurant menu with the help of digital devices. Trying to make braille menus available in all restaurants is a goal which is not realistically reachable. Menus go out of date and need changing. And getting a physical braille copy updated and reprinted is considerably more involved than with digital media. Restaurant owners will also likely not see the benefit of providing a braille menu for a very small circle of customers. With a digital online menu, that is a completely different story. These days, almost every blind person, at least in my social circles, owns an iOS (or similar) device. These devices have speech synthesis and web browsers. Of course, some restaurants, especially in urban areas, do already have a menu online.
I have found them manually with Google and friends sometimes in the past, which has already given me the ability to actually sit back and really comfortably choose amongst the available offerings myself, without having to bother a human, and without having to feel bad about (ab)using their time. However, the case where a restaurant really has its menu online is still rather rare in the area where I am. And it can be tedious to google for a restaurant website. Sometimes the website itself is just marginally accessible, which makes it even more frustrating to get a relaxed dinner experience. I have discovered a location-based solution for the restaurant-menu problem recently. Foursquare offers the ability to provide a direct link to the menu in a restaurant entry. I figured, since all you need to do is write a single web page where the (common) menu items are listed per restaurant, that I could begin to create restaurant menus for my favourite locations on my own. Well, not quite, but almost. I will sometimes need help from others to get the menu digitized, but that's just a one-time piece of work I hopefully can outsource :-). Once the actual content is in my INBOX, I create a nice HTML page listing the menu in a rather speech-based-browser-friendly way. I have begun to do this today, with the menu of a restaurant just about 500 meters away from my apartment. Unterm goldenen Dachl now has a menu online, and the Foursquare change request to publish the corresponding URL is already pending. I don't fully understand how the Foursquare change review process works yet, but I hope the URL should be published in the upcoming days/weeks. I am using Foursquare because it is the backend of a rather popular mobile navigation app for blind people, called Blindsquare. Blindsquare lets you comfortably use OpenStreetMap and Foursquare data to get an overview of your surroundings. If a food place has a menu entry listed in Foursquare, Blindsquare conveniently shows it to you and opens a browser if you tap it. So there is no need to actually search for the restaurant; you can just use the location-based search of Blindsquare to discover the restaurant entry and its menu link directly from within Blindsquare. Actually, you could even find a restaurant by accident, and with a little luck, find the menu for it by location, without even knowing what the restaurant is called. Isn't that neat? Yeah, that's how it is supposed to work; that's as much independence as you can get. And it is, as the title suggests, crowdsourced accessibility. Because while it is nice if restaurant owners care to publish their menu themselves, if they haven't, you can do it yourself. Either as a user of assistive technologies, to scratch your own itch. Or as a friend of a person with a need for assistive technologies. Next time you go to lunch with your blind friend, consider making the menu available to them digitally in advance, instead of reading it. Other people will likely thank you for that, and you will have actually achieved something that day. And if you happen to put a menu online, make sure to submit a change request to Foursquare. Many blind people are using Blindsquare these days, which makes it super easy for them to discover the menu.

15 July 2014

Mario Lang

Mixing vinyl again
The turntables have me back, after quite a long mixing break. I used to do straight 4-to-the-floor, mostly acid or hardtek. You can find an old mix of mine on SoundCloud; this one is actually from back in 2006. But currently I am more into drum and bass. It is an interesting mixing experience, since it is considerably harder. Here is a small but very recent minimix. Experts in the genre might notice that I am mostly spinning stuff from BlackOutMusicNL, admittedly my favourite label right now.

5 July 2014

Mario Lang: I love my MacBookAir with Debian

In short: I love my MacBook Air. It is the best (laptop) hardware I have ever owned. I have seen hardware which was much more flaky in the past. I can set the display backlight to zero via software, which saves me a lot of battery life and also offers a bit of anti-spy-across-my-shoulder protection. WLAN and Bluetooth work nicely. And I just love the form factor and the touch-feeling of the hardware. I even had the bag I use to carry my braille display modified so that the Air just fits in. I can't say how it behaves with X11. Given how flaky accessibility with graphical desktops on Linux is, I have still not made the switch. My MacBook Air is my perfect mobile terminal; I LOVE it. I am sort of surprised about the recent rant of Paul about MacBook hardware. It is rather funny that we perceive the same technology so radically differently. And after reading the second part of his rant I am wondering if I am no longer allowed to consider myself part of the "hardcore F/OSS world", because I don't consider Apple as evil as apparently most others do. Why? Well, first of all, I actually like the hardware. Secondly, you have to show me a vendor first that builds usable accessibility into their products, and I mean all their products, without any extra price tag attached. Once the others start to consider people with disabilities, we can talk about Apple-bashing again. But until then, sorry, you don't see the picture as I do. Apple was the first big company on the market to take accessibility seriously. And they are still unbeaten, at least when it comes to the bells and whistles included. I can unbox and configure any Apple product sold currently completely without assistance. With some products, you just need to know a single keypress (triple-press the home button for touch devices and Cmd+F5 for Mac OS X), and with others, during the initial bootup, a speech synthesizer even tells you how to enable accessibility in case you need it. And after that is enabled, I can perform the setup of the device completely on my own. I don't need help from anyone else. And after the setup is complete, I can use 95% of the functionality provided by the operating system. And I am blind, part of a very small marginal group so to speak. In Debian circles, I have even heard the sentiment that we supposedly have to accept that small marginal groups are ignored sometimes. Well, as long as we think that way, as long as we strictly think economically, we will never be able to go there, fully. And we will never be the universal operating system, actually. Sorry to say that, but I think there is some truth to it. So, who is evil? Scratch-your-own-itch doesn't always work to cover everything. How do we motivate contributors to work on things they don't personally need (yet)? How can we ensure that complicated but seldom-used features stay stable and do not fall to dust just because some upstream decides to rewrite an essential subcomponent of the dependency tree? I don't know. All I know is that these issues need to be solved in a universal operating system.

25 June 2014

Mario Lang: Four new packages on the GNU Emacs Package Archive (ELPA)

I have begun to push some of the Emacs Lisp packages I have been working on over the last few years to GNU ELPA, the Emacs Lisp Package Archive. That means you can use "M-x list-packages RET" to install them in GNU Emacs 24.
OpenSound Control library
In 2007, I wrote OSC server and client support for Emacs. I used it back then to communicate with SuperCollider and related software. osc.el is a rather simple package with no user-visible functionality, as it only provides a library for Emacs Lisp programmers. It is probably most interesting to people wanting to remote-control (modern) sound-related software from within Emacs Lisp.
Texas hold'em poker
As my interest in poker has recently been sparked again, one thing led to another, and I began to write a poker library for GNU Emacs. It was a very fun experience. Version 0.1 of poker.el can simulate a table of ten players. Bots do make their own decisions, although the bot code is very simple. The complete game is currently played in the minibuffer, so there is definitely room for user interface enhancements, such as a poker table mode for displaying a table in a buffer.
Weather information from weather.noaa.gov
I started to write metar.el in 2007 as well, but never really finished it to a releasable state. I use it personally rather often, but had never cleaned it up for a release. This has changed. It plugs in with other GNU Emacs features that make use of your current location. In particular, "M-x sunrise-sunset" and "M-x phases-of-moon" use the same variables (calendar-latitude and calendar-longitude) to determine where you are. "M-x metar" will determine the nearest airport weather station and display the weather information provided by that station.
Chess
Finally, after many, many years of development interrupted by long stretches of hiatus, chess.el is now out as version 2.0.3! For a more detailed article about chess.el, see here.
